Major leap towards reanimation after death as mammal's brain preserved

New Scientist

A pig's brain has been frozen with minimal damage, locking its cellular activity in place. Could our brains one day be preserved in a way that locks in our thoughts, feelings and perceptions? An entire mammalian brain has been successfully preserved using a technique that will now be offered to people who are terminally ill. The intention is to preserve all the neural information thought necessary to one day reconstruct the mind of the person it once belonged to. "They would need to donate their brain and body for scientific research," says Borys Wróbel at Nectome in San Francisco, California, a research company focused on memory preservation.




Switching Temporary Teachers for Semi-Supervised Semantic Segmentation

Neural Information Processing Systems

The teacher-student framework, prevalent in semi-supervised semantic segmentation, mainly employs the exponential moving average (EMA) to update a single teacher's weights from the student's. However, EMA updates create a problem: the weights of the teacher and student become coupled, causing a potential performance bottleneck. This problem can become more severe when training with more complicated labels, such as segmentation masks, and little annotated data. This paper introduces Dual Teacher, a simple yet effective approach that employs dual temporary teachers to alleviate the coupling problem for the student.
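A minimal sketch of the mechanism the abstract describes: a standard EMA teacher update, and a training loop that alternates between two temporary teachers so that each teacher tracks the student only in alternating periods. This is an illustration of the idea, not the paper's implementation; the function names, the dict-of-floats weight representation, and the per-epoch switching schedule are assumptions for clarity.

```python
def ema_update(teacher_params, student_params, decay=0.99):
    """Standard EMA: the teacher's weights track a moving average of the
    student's. Parameters are plain name -> float dicts for illustration."""
    return {k: decay * teacher_params[k] + (1 - decay) * student_params[k]
            for k in teacher_params}

def train_sketch(student, teachers, num_epochs=4, decay=0.99):
    """Dual-Teacher-style loop (sketch): two temporary teachers are used and
    updated in alternation, loosening the tight coupling that a single
    EMA teacher develops with the student."""
    for epoch in range(num_epochs):
        active = teachers[epoch % len(teachers)]  # switch teacher each epoch
        # ... pseudo-labelling with `active` and the student's gradient
        # step on labelled + pseudo-labelled data are elided here ...
        for k in active:  # EMA update applied only to the active teacher
            active[k] = decay * active[k] + (1 - decay) * student[k]
    return teachers
```

With a single teacher, every EMA step pulls the teacher toward the current student; with two teachers updated in alternation, each one averages over a sparser, staggered history of student states, which is the decoupling effect the paper targets.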







Don't Always Pick the Highest-Performing Model: An Information Theoretic View of LLM Ensemble Selection

Turkmen, Yigit, Buyukates, Baturalp, Bastopcu, Melih

arXiv.org Machine Learning

Large language models (LLMs) are often ensembled to improve overall reliability and robustness, but in practice the models are strongly correlated. This raises a fundamental question: which models should be selected when forming an LLM ensemble? We formulate budgeted ensemble selection as maximizing the mutual information between the true label and the predictions of the selected models. To explain why performance can saturate even with many models, we model the correlated errors of the models using a Gaussian copula and show an information-theoretic error floor for ensemble performance. Motivated by these results, we propose a simple greedy mutual-information selection algorithm that estimates the required information terms directly from data and iteratively builds an ensemble under a query budget. We test our approach on two question-answering datasets and one binary sentiment classification dataset: MEDMCQA, MMLU, and IMDB movie reviews. Across all datasets, our method consistently outperforms strong baselines under the same query budget.
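The greedy selection idea can be sketched as follows: estimate the mutual information between the true label and the joint predictions of a candidate set from held-out data, and greedily add the model whose inclusion yields the largest gain until the budget is exhausted. This is a sketch under assumptions, not the paper's method: it uses a naive plug-in (frequency-based) MI estimator over discrete predictions, and the function and variable names are illustrative.

```python
from collections import Counter
from math import log2

def mutual_information(labels, pred_tuples):
    """Plug-in estimate of I(Y; S) between true labels Y and the joint
    prediction tuple S of the selected models, from empirical frequencies."""
    n = len(labels)
    joint = Counter(zip(labels, pred_tuples))
    p_y = Counter(labels)
    p_s = Counter(pred_tuples)
    mi = 0.0
    for (y, s), c in joint.items():
        p_joint = c / n
        mi += p_joint * log2(p_joint / ((p_y[y] / n) * (p_s[s] / n)))
    return mi

def greedy_select(labels, model_preds, budget):
    """Greedily add the model whose inclusion most increases the empirical
    mutual information with the true label, up to `budget` models.
    `model_preds` maps a model name to its list of per-example predictions."""
    selected = []
    remaining = list(model_preds)
    while remaining and len(selected) < budget:
        def gain(m):
            cols = [model_preds[k] for k in selected + [m]]
            return mutual_information(labels, list(zip(*cols)))
        best = max(remaining, key=gain)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Because the gain is computed on the joint prediction tuple, a model that duplicates an already-selected model adds no information and is skipped in favor of a weaker but less correlated one, which is exactly why the highest-performing models are not always the ones picked.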